Gaussian Elimination with Randomized Complete Pivoting

Authors

  • Christopher Melgaard
  • Ming Gu
Abstract

Gaussian elimination with partial pivoting (GEPP) has long been among the most widely used methods for computing the LU factorization of a given matrix. However, this method is also known to fail for matrices that induce large element growth during the factorization process. In this paper, we propose a new scheme, Gaussian elimination with randomized complete pivoting (GERCP), for the efficient and reliable LU factorization of a given matrix. GERCP satisfies GECP (Gaussian elimination with complete pivoting) style element growth bounds with high probability, yet costs only marginally more than GEPP. Our numerical experimental results strongly suggest that GERCP is as reliable as GECP and as efficient as GEPP for computing the LU factorization.

1. Background and Motivation.

Solving linear systems of equations

    Ax = b,    (1.1)

where A ∈ R^{n×n} and x, b ∈ R^n, is a fundamental problem in numerical linear algebra and scientific computing. Gaussian Elimination with Partial Pivoting (GEPP) solves this problem by computing the LU factorization of A and is typically efficient and reliable. Over the years, GEPP has been repeatedly re-designed and re-implemented for better performance, and is the backbone of generations of mathematical software packages, including LINPACK [6], LAPACK [1], PLAPACK, ScaLAPACK, PLASMA and MAGMA. GEPP routines in today's mathematical software libraries, such as the Intel MKL, are capable of solving linear systems of equations with tens of thousands of variables at or near the peak of the machine's speed.

Efficiency aside, an equally important consideration is numerical reliability. While algorithms for solving eigenvalue problems have become significantly more stable over the years, GEPP was known to be, and remains, a method that is mostly stable in practice but unstable for many well-known matrices, including some from common integral equation and differential equation applications [8, 30].

Pivoting plays a crucial role in the reliability of Gaussian elimination (GE), which is tied to element growth within the LU factorization process. The most naive version of GE, Gaussian elimination without pivoting (GENP), does not perform any pivoting and requires only 2/3 n^3 + O(n^2) floating point operations with no entry comparisons [4]. However, this method can suffer from uncontrolled element growth and is only known to be reliable in a few instances, such as diagonally dominant matrices, among others. The most popular version of GE is GEPP, which limits element growth to at most exponential by swapping the rows of A (i.e., partial pivoting) during elimination, and is numerically stable on average. The additional cost, about 1/2 n^2 entry comparisons and the associated data movement, is typically a small fraction of the total GE cost. The most reliable version of GE is Gaussian elimination with complete pivoting (GECP), which swaps both rows and columns for sub-exponential element growth [28] and is universally believed to be always backward stable in practice [4]. However, GECP is prohibitively slow, with 1/3 n^3 + O(n^2) entry comparisons and relatively little memory reuse [4, 14]. Rook pivoting [9, 21, 24] is an attempt to speed up complete pivoting while maintaining the guarantee of sub-exponential element growth, and it is part of the LUSOL package [10] for sparse LU factorization. Despite having better performance in the "average" case, there are many matrices that still require O(n^3) entry comparisons in the worst case, providing a negligible speedup over complete pivoting [14].
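To make the element growth issue concrete, the short MATLAB sketch below (our own illustration, not taken from the paper) measures the partial-pivoting growth factor max|U| / max|A| on the classic worst-case matrix, where it reaches 2^{n−1}:

    % Wilkinson's worst-case matrix for partial pivoting:
    % 1 on the diagonal, -1 strictly below it, and 1 in the last column.
    n = 60;
    A = eye(n) - tril(ones(n), -1);   % diagonal 1, strictly lower part -1
    A(:, n) = 1;                      % last column of ones
    [L, U, P] = lu(A);                % built-in lu uses partial pivoting
    growth = max(abs(U(:))) / max(abs(A(:)));
    fprintf('GEPP growth factor: %.3g  (2^(n-1) = %.3g)\n', growth, 2^(n-1));

On this matrix GEPP performs no row interchanges and the last column doubles at every stage, so the computed solution of Ax = b can lose essentially all accuracy in double precision once n is around 60; a complete-pivoting style rule would instead pivot on the growing entries of the last column and stop the doubling.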
In this paper, we propose a novel pivoting scheme called Gaussian elimination with randomized complete pivoting (GERCP). We show that GERCP satisfies a stability condition similar to that of complete pivoting, suggesting that these methods share similar stability properties. Yet, we also demonstrate that GERCP is comparable to GEPP in computational overhead. Our numerical experimental results strongly suggest that GERCP is a numerically stable and computationally efficient alternative to GEPP. (Both authors are with the Department of Mathematics, University of California, Berkeley. This research was supported in part by NSF Award CCF-1319312.)

Randomization has been used in the literature to fix the numerical instability of GEPP, by performing GE on the product of random matrices and A to avoid catastrophically bad pivots. These methods are known to work well in practice in general, but they still lack effective control on element growth, and can be much less accurate than GEPP.

In Section 2, we introduce the necessary notation and background for the paper. In Section 3, we introduce GERCP and state and prove some of its important properties. In Section 4, we discuss numerical experiments and implementations of GERCP. The appendix contains results needed by the proofs in Section 3.

2. The Setup and Background.

In this paper, we consider Gaussian elimination on an invertible square matrix A ∈ R^{n×n}, although our algorithms and analysis carry over to the cases of singular matrices and rectangular matrices with few modifications.

2.1. Notation.

We will follow the familiar slight abuse of notation from scientific computing and numerical linear algebra, mimicking the way that LAPACK overwrites the input matrix with the L and U factors. The diagonal of A becomes the diagonal of U, because the diagonal of L is always 1 and thus does not need to be stored.

Algorithm 2.1. Classical Gaussian Elimination in Matlab Notation
Input: n × n matrix A
Output: lower triangular L with unit diagonal, upper triangular U, row permutation Πr, column permutation Πc
set A = A
for k = 1, ..., n − 1 do (i.e., the k-th stage of LU)
  1. select column pivot (INSERT PIVOTING RULE);
  2. swap (UPDATE A AND Πc WITH PIVOT DECISION);
  3. select row pivot (INSERT PIVOTING RULE);
  4. swap (UPDATE A AND Πr WITH PIVOT DECISION);
  5. compute A(k+1:n, k) = A(k+1:n, k) / A(k, k);
  6. compute A(k+1:n, k+1:n) = A(k+1:n, k+1:n) − A(k+1:n, k) * A(k, k+1:n);
end for

Remark 2.1. While the working matrix A has been overwritten in Algorithm 2.1, in our subsequent discussions we will refer to L and U as the triangular matrices stored in A, and still refer to A as the original input matrix. We will use A_k ∈ R^{n×n} to refer explicitly to the working matrix before the k-th stage of the outermost loop. Thus, A_n refers to the working matrix after the algorithm terminates, i.e., after the (n−1)-th stage of the outer loop.

Remark 2.2. For ease of discussion, we have written Algorithm 2.1 in such a way that, for each k, it performs possible column pivoting before any possible row pivoting. GENP, GEPP, GECP and rook pivoting can all be written in this form.

For any appropriate dimension m, we denote by e_i ∈ R^m the i-th standard basis vector, i.e., the vector with all entries equal to 0 except for the i-th entry, which equals 1; we also denote by e ∈ R^m the vector with all entries equal to 1.
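As one concrete instantiation of Algorithm 2.1, the sketch below fills the pivoting placeholders with classical complete pivoting, written in the column-then-row form above. This is our own illustrative MATLAB code, not the paper's implementation, and the function name ge_complete_pivot is ours:

    function [L, U, pr, pc] = ge_complete_pivot(A)
    % Illustrative instantiation of Algorithm 2.1 with classical complete
    % pivoting, in the column-pivot-then-row-pivot (top-heavy) form.
    % pr, pc are permutation vectors with A(pr, pc) = L*U up to roundoff.
    n  = size(A, 1);
    pr = 1:n;  pc = 1:n;
    for k = 1:n-1
        S = A(k:n, k:n);                      % working Schur complement
        % Steps 1-2: column pivot = column of S holding the largest |entry|
        [~, jmax] = max(max(abs(S), [], 1));
        j = jmax + k - 1;
        A(:, [k j]) = A(:, [j k]);   pc([k j]) = pc([j k]);
        % Steps 3-4: row pivot = largest |entry| in the new k-th column
        [~, imax] = max(abs(A(k:n, k)));
        i = imax + k - 1;
        A([k i], :) = A([i k], :);   pr([k i]) = pr([i k]);
        % Steps 5-6: multipliers and Schur complement update
        A(k+1:n, k)     = A(k+1:n, k) / A(k, k);
        A(k+1:n, k+1:n) = A(k+1:n, k+1:n) - A(k+1:n, k) * A(k, k+1:n);
    end
    L = tril(A, -1) + eye(n);   % unit lower triangular factor
    U = triu(A);                % upper triangular factor
    end

For A = randn(6) and [L, U, pr, pc] = ge_complete_pivot(A), the residual norm(A(pr, pc) - L*U) should be at roundoff level. GEPP corresponds to skipping steps 1-2 altogether, while the GERCP scheme developed later in the paper replaces the column-selection rule with a randomized one while keeping a top-heavy row pivot.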
Any permutation matrix Π ∈ R^{n×n} is a square matrix with exactly one entry equal to 1 in each row and column, and all other entries equal to 0. We refer to the permutation induced by Π as π : {1, ..., n} → {1, ..., n} in the sense that π(i) = j if and only if Π e_i = e_j. We will commonly make use of the swap or 2-cycle permutation given by π(i,j), or Π(i,j) in matrix form, defined by

    π(i,j)(i) = j,   π(i,j)(j) = i,   and   π(i,j)(k) = k for all k ≠ i, j.

Fig. 2.1. Common examples of Matlab notation, for 1 ≤ i ≤ p ≤ m and 1 ≤ j ≤ q ≤ n.

Notation      | Pivoted Notation       | Dimensions                  | Description
B(:, :)       | B(π1(:), π2(:))        | R^{m×n}                     | entire matrix B or Π1 B Π2^T, resp.
B(i, :)       | B(π1(i), π2(:))        | row vector in R^n           | i-th row of B or Π1 B Π2^T, resp.
B(:, j)       | B(π1(:), π2(j))        | column vector in R^m        | j-th column of B or Π1 B Π2^T, resp.
B(i, j:q)     | B(π1(i), π2(j:q))      | row vector in R^{q−j+1}     | j-th through q-th entries of the i-th row of B or Π1 B Π2^T, resp.
B(i:p, j)     | B(π1(i:p), π2(j))      | column vector in R^{p−i+1}  | i-th through p-th entries of the j-th column of B or Π1 B Π2^T, resp.
B(i:p, j:q)   | B(π1(i:p), π2(j:q))    | R^{(p−i+1)×(q−j+1)}         | submatrix at the intersection of the i-th through p-th rows and the j-th through q-th columns of B or Π1 B Π2^T, resp.

We denote the final row and column permutations of an algorithm as Πr and Πc, respectively. At the k-th stage of LU, Algorithm 2.1 will swap the k-th column with the α_k-th column and the k-th row with the β_k-th row. As a result, we can write

    Πc = Π(n−1, α_{n−1}) · · · Π(2, α_2) Π(1, α_1),   Πr = Π(n−1, β_{n−1}) · · · Π(2, β_2) Π(1, β_1)

as a product of the individual column/row swaps. Furthermore, we define the following notation for the first k − 1 swaps and the last n − k swaps:

    Πc,k = Π(k−1, α_{k−1}) · · · Π(2, α_2) Π(1, α_1),   Πc,−k = Π(n−1, α_{n−1}) · · · Π(k+1, α_{k+1}) Π(k, α_k).

We will use the analogous definitions for Πr,k and Πr,−k. In Figure 2.1, we describe the use of the MATLAB colon notation, in combination with the permutations above, to express the row/column reorderings of a matrix and of selected submatrices. Let πc and πr be the permutations of the columns and rows performed by LU, respectively. We use the following notation to refer to matrices with the final pivoting applied a priori:

    A^c = Πr A Πc^T = A(πr(:), πc(:)),   A^c_k = Πr,−k A_k Πc,−k^T = A_k(πr,−k(:), πc,−k(:)).

We do this because all of the pivoting methods discussed in this paper are top-heavy, as defined in Definition 2.10. The row pivots πr of a top-heavy pivoting strategy are deterministic given the column pivots πc applied to A by the LU factorization, because each row pivot must satisfy equation (2.6). Therefore, when writing A^c, it is understood that the row pivots are the unique set of top-heavy row pivots.

Schur complements play a crucial role in Gaussian elimination. We establish notation for Schur complements as S_k ∈ R^{(k:n)×(k:n)}. Notice the use of the Matlab notation (k:n) × (k:n) instead of (n−k+1) × (n−k+1). The Schur complement S_k will act like a normal (n−k+1) × (n−k+1) matrix for most operations, such as matrix multiplication, matrix addition, etc. However, when using the Matlab notation in Figure 2.1 to access entries of S_k, we impose the abuse of notation that rows and columns are enumerated from k to n, instead of 1 to n − k + 1. For example, ...
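As a stand-in illustration of the permutation bookkeeping above (our own sketch; the swap targets alpha and beta are hypothetical values, not produced by any pivoting rule), the following MATLAB snippet composes the stage-by-stage swaps into permutation vectors and checks the identity Πr A Πc^T = A(πr(:), πc(:)):

    % Build Pi_r and Pi_c from stage-k swaps (k, alpha_k) and (k, beta_k),
    % then verify the matrix form agrees with Matlab-style indexing.
    n = 5;  rng(0);  A = randn(n);
    alpha = [3 4 3 5];                        % hypothetical column-swap targets
    beta  = [2 2 4 4];                        % hypothetical row-swap targets
    pr = 1:n;  pc = 1:n;
    for k = 1:n-1
        pc([k alpha(k)]) = pc([alpha(k) k]);  % apply Pi_(k,alpha_k) to the columns
        pr([k beta(k)])  = pr([beta(k) k]);   % apply Pi_(k,beta_k)  to the rows
    end
    I = eye(n);
    Pir = I(pr, :);                           % Pi_r as a matrix: Pir*A  = A(pr, :)
    Pic = I(pc, :);                           % Pi_c as a matrix: A*Pic' = A(:, pc)
    norm(Pir * A * Pic' - A(pr, pc))          % = 0, i.e. Pi_r A Pi_c' = A(pr, pc)

Tracking pr and pc by swapping their entries at each stage is essentially how LAPACK-style LU codes record their pivots (they store the sequence of interchanges rather than forming permutation matrices explicitly).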


Similar articles

Complete pivoting strategy for the IUL preconditioner obtained from Backward Factored APproximate INVerse process

In this paper, we use a complete pivoting strategy to compute the IUL preconditioner obtained as the by-product of the Backward Factored APproximate INVerse process. This pivoting is based on the complete pivoting strategy of the Backward IJK version of Gaussian Elimination process. There is a parameter α to control the complete pivoting process. We have studied the effect of dif...


Complete pivoting strategy for the left-looking Robust Incomplete Factorization preconditioner

In this paper, we have used a complete pivoting strategy to compute the left-looking version of RIF preconditioner. This pivoting is based on the complete pivoting strategy of the IJK version of Gaussian Elimination process. There is a parameter α to control the pivoting process. To study the effect of α on the quality of the left-looking version of RIF preconditioner with complete pivoting str...


Stable Pivoting for the Fast Factorization of Cauchy-Like Matrices

Recent work by Sweet and Brent on the fast factorization of Cauchy-like matrices through a fast version of Gaussian elimination with partial pivoting has uncovered a potential stability problem which is not present in ordinary Gaussian elimination: excessive growth in the generators used to represent the matrix and its Schur complements can lead to large errors. A natural way to fix this problem ...


TR-2014008: Supporting GENP and Low-Rank Approximation with Random Multipliers

We prove that standard Gaussian random multipliers are expected to stabilize numerically both Gaussian elimination with no pivoting and block Gaussian elimination and that they also are expected to support the celebrated randomized algorithm for low-rank approximation of a matrix even without customary oversampling. Our tests show similar results where we apply random circulant and Toeplitz mul...


Large growth factors in Gaussian elimination with pivoting

The growth factor plays an important role in the error analysis of Gaussian elimination. It is well known that when partial pivoting or complete pivoting is used the growth factor is usually small, but it can be large. The examples of large growth usually quoted involve contrived matrices that are unlikely to occur in practice. We present real and complex n × n matrices arising from practical app...


On the Complete Pivoting Conjecture for a Hadamard Matrix of Order 12

This paper settles a conjecture by Day and Peterson that if Gaussian elimination with complete pivoting is performed on a 12 by 12 Hadamard matrix, then must be the (absolute) pivots. In contrast, at least 30 patterns for the absolute values of the pivots have been observed for 16 by 16 Hadamard matrices. This problem is non-trivial because row and column permutations do not preserve pivots. A ...



Journal:
  • CoRR

Volume: abs/1511.08528   Issue:

Pages: -

Publication date: 2015